01. Congratulations!
You have reached the end of the first part of the nanodegree and now have a strong foundational background in reinforcement learning. You should be proud of all of your hard work! Take the time to celebrate your accomplishment!
Over the next couple of months, you will dive deeply into cutting-edge deep reinforcement learning and build much more!
### Where have we been?
In the first several lessons, you built a strong foundation in reinforcement learning by learning how to solve finite Markov Decision Processes (MDPs), where the number of states and actions is limited. For instance, you wrote your own implementations of many tabular solution methods, such as Q-Learning and Expected Sarsa, among other algorithms.
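As a refresher, the heart of tabular Q-Learning is a single update rule applied to one entry of the Q-table at a time. Here is a minimal sketch of that update (the function name, learning rate, and discount factor are illustrative choices, not from the lessons verbatim):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-Learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q is a 2-D array indexed by [state, action]."""
    td_target = r + gamma * np.max(Q[s_next])   # bootstrapped target
    Q[s, a] += alpha * (td_target - Q[s, a])    # move estimate toward target
    return Q

# example: a 2-state, 2-action problem, starting from an all-zero table
Q = np.zeros((2, 2))
q_learning_update(Q, s=0, a=0, r=1.0, s_next=1)
```

Expected Sarsa differs only in the target: it replaces the `max` over next actions with an expectation under the current policy.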
Then, you learned how to generalize these algorithms to work with large and continuous spaces. Techniques such as tile coding and coarse coding greatly expand the range of problems that can be solved with traditional reinforcement learning methods.
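To make the idea concrete, here is a minimal sketch of tile coding for a one-dimensional continuous state: several overlapping tilings, each offset by a fraction of a tile width, map a real-valued state to a small set of active tile indices. (The function name and parameter choices here are illustrative, not from the course materials.)

```python
def tile_encode(x, low=0.0, high=1.0, n_tilings=4, n_tiles=10):
    """Encode a scalar state x in [low, high] as a list of active tile
    indices, one per tiling. Each tiling is shifted by a fraction of a
    tile width so the tilings overlap and generalize between states."""
    active = []
    tile_width = (high - low) / n_tiles
    for t in range(n_tilings):
        offset = t * tile_width / n_tilings   # per-tiling shift
        idx = int((x - low + offset) / tile_width)
        idx = min(max(idx, 0), n_tiles - 1)   # clamp to valid range
        active.append(t * n_tiles + idx)      # globally unique index
    return active

features = tile_encode(0.5)   # four active tiles, one per tiling
```

A value function can then be represented as one learned weight per tile, with the value of a state given by the sum of the weights of its active tiles.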
As you learned in the previous lesson, this lays the foundation for developing deep reinforcement learning algorithms.
### Where are we going?
Deep Reinforcement Learning is a relatively recent term that refers to approaches that use deep learning (mainly multi-layer neural networks) to solve reinforcement learning problems.
In the next part of the nanodegree, you'll learn all about Value-Based Methods (such as the Deep Q-Learning algorithm) that use a neural network in place of the Q-table (estimated optimal action-value function) from traditional algorithms.
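The core substitution can be sketched in a few lines: instead of looking up a row of a Q-table, a small feedforward network maps a continuous state vector to one Q-value per action. This is only an illustrative NumPy sketch with randomly initialized weights standing in for a trained network; the dimensions and layer sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, n_actions, hidden = 4, 2, 16

# randomly initialized weights stand in for a trained network
W1 = rng.normal(scale=0.1, size=(state_dim, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, n_actions))
b2 = np.zeros(n_actions)

def q_values(state):
    """Forward pass: maps a continuous state vector to one Q-value per
    action, playing the role of a row lookup in a Q-table."""
    h = np.maximum(0.0, state @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2                     # one output per action

state = np.array([0.1, -0.2, 0.05, 0.3])
greedy_action = int(np.argmax(q_values(state)))   # acting greedily
```

Because the network generalizes across similar states, it can handle state spaces far too large to enumerate in a table; training it stably is what the Deep Q-Learning algorithm addresses.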
![Visualization of Deep Q-Network (DQN) applied to an Atari game.](img/dqn.png)
Visualization of Deep Q-Network (DQN) applied to Atari game. (Source)